
    Avoiding catastrophic failure in correlated networks of networks

    Networks in nature do not act in isolation but instead exchange information and depend on each other to function properly. An incipient theory of networks of networks has shown that interconnected random networks may very easily undergo abrupt failures. This theoretical finding bears an intrinsic paradox: if natural systems organize in interconnected networks, how can they be so stable? Here we provide a solution to this conundrum, showing that the stability of a system of networks relies on the relation between the internal structure of a network and its pattern of connections to other networks. Specifically, we demonstrate that if inter-network connections are provided by hubs, and if there is a moderate degree of convergence of inter-network connections, the system of networks is stable and robust to failure. We test this theoretical prediction in two independent experiments on functional brain networks (in task and resting states), which show that brain networks are connected with a topology that maximizes stability according to the theory. Comment: 40 pages, 7 figures.
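    The "abrupt failures" in connected random networks refer to the mutual-percolation cascade of interdependent networks. Below is a minimal illustrative sketch (not the authors' model) of the fragile baseline case: two Erdős-Rényi graphs with one-to-one interdependence, where a node dies if its partner dies or if it falls outside the giant component of its own network. The function name and parameters are invented for illustration; the hub-based, convergent coupling the abstract proposes would change the dependency pattern.

```python
import random

import networkx as nx

def mutual_cascade(n=500, k=4.0, p_attack=0.1, seed=1):
    """Mutual-percolation cascade on two interdependent Erdos-Renyi
    networks.  Node i in A depends on node i in B and vice versa:
    when one fails, so does its partner, and any node that falls
    outside the giant component of its own network also fails."""
    rng = random.Random(seed)
    ga = nx.gnp_random_graph(n, k / n, seed=seed)
    gb = nx.gnp_random_graph(n, k / n, seed=seed + 1)
    # initial random attack on a fraction p_attack of nodes in A
    attacked = rng.sample(range(n), int(p_attack * n))
    ga.remove_nodes_from(attacked)
    gb.remove_nodes_from(attacked)  # partners fail with their pair
    changed = True
    while changed:
        changed = False
        for g, other in ((ga, gb), (gb, ga)):
            if g.number_of_nodes() == 0:
                continue
            giant = max(nx.connected_components(g), key=len)
            dead = [v for v in list(g.nodes) if v not in giant]
            if dead:
                g.remove_nodes_from(dead)
                other.remove_nodes_from(dead)  # dependency links fire
                changed = True
    return ga.number_of_nodes() / n  # surviving mutual fraction

frac_mild = mutual_cascade(p_attack=0.1)   # light attack: large survivor set
frac_heavy = mutual_cascade(p_attack=0.6)  # heavy attack: cascade to collapse
```

With one-to-one coupling the transition between `frac_mild` and `frac_heavy` is abrupt, which is exactly the fragility the abstract's hub-mediated coupling is meant to avoid.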

    Emergent complex neural dynamics

    A large repertoire of spatiotemporal activity patterns in the brain is the basis for adaptive behaviour. Understanding the mechanism by which the brain's hundred billion neurons and hundred trillion synapses manage to produce such a range of cortical configurations in a flexible manner remains a fundamental problem in neuroscience. One plausible solution is the involvement of universal mechanisms of emergent complex phenomena evident in dynamical systems poised near a critical point of a second-order phase transition. We review recent theoretical and empirical results supporting the notion that the brain is naturally poised near criticality, as well as its implications for a better understanding of the brain.
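    The signature of a system poised at a critical point can be illustrated with the simplest toy model of activity propagation, a Galton-Watson branching process: below a branching ratio of 1 activity dies out quickly, while exactly at 1 avalanche sizes become heavy-tailed. This sketch is a generic illustration, not a model from the review; all names are made up here.

```python
import random

def avalanche_size(sigma, rng, max_size=100_000):
    """Total size of one avalanche in a branching process: each
    active unit activates each of 2 downstream units with
    probability sigma / 2, so the mean branching ratio is sigma.
    sigma < 1 is subcritical; sigma = 1 is the critical point,
    where avalanche sizes become power-law distributed."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = sum(1 for _ in range(2 * active) if rng.random() < sigma / 2)
    return size

rng = random.Random(42)
sub = [avalanche_size(0.5, rng) for _ in range(2000)]   # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(2000)]  # critical
mean_sub = sum(sub) / len(sub)     # theory: 1 / (1 - sigma) = 2
mean_crit = sum(crit) / len(crit)  # grows without bound as the cutoff grows
```

The heavy tail at `sigma = 1` is the counterpart of the scale-free neuronal avalanches cited as empirical evidence for brain criticality.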

    Coupling and Elastic Loading Affect the Active Response by the Inner Ear Hair Cell Bundles

    Active hair bundle motility has been proposed to underlie the amplification mechanism in the auditory end organs of non-mammals and in the vestibular systems of all vertebrates, and to constitute a crucial component of cochlear amplification in mammals. We used semi-intact in vitro preparations of the bullfrog sacculus to study the effects of elastic mechanical loading on both natively coupled and freely oscillating hair bundles. For the latter, we attached glass fibers of different stiffness to the stereocilia and observed the induced changes in the spontaneous bundle movement. When driven with sinusoidal deflections, hair bundles displayed a phase-locked response indicative of an Arnold tongue, with frequency selectivity highest at low amplitudes and decreasing under stronger stimulation. A striking broadening of the mode-locked response was seen with increasing stiffness of the load, until approximate impedance matching, at which point the phase-locked response remained flat over the physiological range of frequencies. When the otolithic membrane was left intact atop the preparation, the natural loading of the bundles likewise decreased their frequency selectivity with respect to that observed in freely oscillating bundles. To probe for signatures of the active process under natural loading and coupling conditions, we applied transient mechanical stimuli to the otolithic membrane. Following the pulses, the underlying bundles displayed active movement in the opposite direction, analogous to the twitches observed in individual cells. Tracking features in the otolithic membrane indicated that it moved in phase with the bundles. Hence, synchronous active motility evoked in the system of coupled hair bundles by external input is sufficient to displace large overlying structures.
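    The Arnold tongue mentioned above is a generic feature of driven nonlinear oscillators. A minimal sketch, assuming the standard sine circle map (a textbook caricature, not the authors' hair-bundle model): the winding number locks to a fixed value over a whole interval of detuning, and the locked interval widens as the coupling grows, analogous to the broadening of the mode-locked response with load stiffness.

```python
import math

def winding_number(omega, k, n=2000, burn=500):
    """Average phase advance per iteration of the sine circle map
        theta -> theta + omega - (k / 2*pi) * sin(2*pi * theta).
    Inside an Arnold tongue the winding number stays pinned at a
    rational value over a whole interval of the drive detuning
    omega; the tongue widens as the coupling k grows."""
    theta, total = 0.0, 0.0
    for i in range(n + burn):
        step = omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        theta += step
        if i >= burn:
            total += step
    return total / n

w_free = winding_number(0.05, 0.0)    # no coupling: winding equals omega
w_locked = winding_number(0.05, 1.0)  # inside the 0-tongue: locks to 0
w_wide = winding_number(0.12, 1.0)    # strong coupling: still locked
w_drift = winding_number(0.12, 0.5)   # weaker coupling: tongue too narrow
```

Comparing `w_wide` and `w_drift` shows the tongue-widening effect: the same detuning is locked at strong coupling and drifts at weak coupling.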

    Concurrence of form and function in developing networks and its role in synaptic pruning

    A fundamental question in neuroscience is how structure and function of neural systems are related. We study this interplay by combining a familiar auto-associative neural network with an evolving mechanism for the birth and death of synapses. A feedback loop then arises, leading to two qualitatively different types of behaviour. In one, the network structure becomes heterogeneous and disassortative, and the system displays good memory performance; furthermore, the structure is optimised for the particular memory patterns stored during the process. In the other, the structure remains homogeneous and incapable of pattern retrieval. These findings provide an inspiring picture of brain structure and dynamics that is compatible with experimental results on early brain development, and may help to explain synaptic pruning. Other evolving networks, such as those of protein interactions, might share the basic ingredients of this feedback loop, and indeed many of their structural features are as predicted by our model. We are grateful for financial support from the Spanish MINECO (project of Excellence FIS2017-84256-P) and from "Obra Social La Caixa".
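    A minimal sketch of the two ingredients the abstract combines: a Hebbian auto-associative (Hopfield-style) network, and synapse removal. The pruning rule here (delete the weakest half of the synapses by magnitude) is a crude stand-in for the authors' evolving birth/death mechanism, chosen only to illustrate that retrieval can survive heavy pruning at low memory load; all names and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                        # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix, no self-connections
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(weights, state, steps=10):
    """Synchronous sign-threshold dynamics of the network."""
    state = state.astype(float)
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1.0
    return state

# probe: pattern 0 with 10% of its bits flipped
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1

overlap = abs(recall(W, probe) @ patterns[0]) / N  # ~1 on retrieval

# prune every synapse whose magnitude is at or below the median,
# keeping only the strongest ~37% of connections
thresh = np.quantile(np.abs(W[W != 0]), 0.5)
Wp = np.where(np.abs(W) > thresh, W, 0.0)
overlap_p = abs(recall(Wp, probe) @ patterns[0]) / N
```

That `overlap_p` remains near 1 after deleting most synapses is the static counterpart of the abstract's point: structure that retains the right connections can preserve memory performance through pruning.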

    A New Measure of Centrality for Brain Networks

    Recent developments in network theory have allowed for the study of the structure and function of the human brain in terms of a network of interconnected components. Among the many nodes that form a network, some play a crucial role and are said to be central within the network structure. Central nodes may be identified via centrality metrics, with degree, betweenness, and eigenvector centrality being three of the most popular measures. Degree identifies the most connected nodes, whereas betweenness centrality identifies those located on the most traveled paths. Eigenvector centrality considers nodes connected to other high-degree nodes as highly central. In the work presented here, we propose a new centrality metric called leverage centrality that considers the extent of connectivity of a node relative to the connectivity of its neighbors. The leverage centrality of a node in a network is determined by the extent to which its immediate neighbors rely on that node for information. Although similar in concept, there are essential differences between eigenvector and leverage centrality that are discussed in this manuscript. Degree, betweenness, eigenvector, and leverage centrality were compared using functional brain networks generated from healthy volunteers. Functional cartography was also used to identify neighborhood hubs (nodes with high degree within a network neighborhood). Provincial hubs provide structure within the local community, and connector hubs mediate connections between multiple communities. Leverage centrality proved to yield information that was not captured by degree, betweenness, or eigenvector centrality and was more accurate at identifying neighborhood hubs. We propose that this metric may be able to identify critical nodes that are highly influential within the network.
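    A sketch of the leverage centrality the abstract proposes, using the published definition (the normalized degree difference between a node and each of its neighbors, averaged over neighbors); the convention for isolated nodes is our own assumption.

```python
import networkx as nx

def leverage_centrality(g):
    """Leverage centrality of each node i with degree k_i:
        l_i = (1 / k_i) * sum over neighbors j of (k_i - k_j) / (k_i + k_j)
    Positive leverage: the neighbors rely on i for information;
    negative: i sits in the shadow of better-connected neighbors."""
    deg = dict(g.degree())
    lev = {}
    for i in g.nodes:
        if deg[i] == 0:
            lev[i] = 0.0  # convention for isolated nodes (an assumption)
        else:
            lev[i] = sum((deg[i] - deg[j]) / (deg[i] + deg[j])
                         for j in g.neighbors(i)) / deg[i]
    return lev

star = nx.star_graph(4)  # hub 0 linked to leaves 1..4
lev = leverage_centrality(star)
# hub: (1/4) * 4 * (4 - 1)/(4 + 1) = 0.6; each leaf: (1 - 4)/(1 + 4) = -0.6
```

The star graph makes the sign convention concrete: the hub, on which every leaf depends, has positive leverage, while each leaf has the mirror-image negative value.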

    Do brain networks evolve by maximizing their information flow capacity?

    We propose a working hypothesis, supported by numerical simulations, that brain networks evolve based on the principle of the maximization of their internal information flow capacity. We find that the synchronous behavior and information flow capacity of the evolved networks reproduce well the behaviors observed in the brain dynamical networks of Caenorhabditis elegans and humans, modeled as networks of Hindmarsh-Rose neurons whose graphs are given by these brain networks. We make a strong case for our hypothesis by showing that the neural networks with the closest graph distance to the brain networks of Caenorhabditis elegans and humans are the Hindmarsh-Rose neural networks evolved with coupling strengths that maximize information flow capacity. Surprisingly, we find that global neural synchronization levels decrease during brain evolution, reflecting an underlying globally non-Hebbian evolution process, driven by non-Hebbian learning behaviors in some clusters during evolution and by Hebbian-like learning rules in clusters where neurons increase their synchronization.
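    The building block of the simulations above is the Hindmarsh-Rose neuron. A minimal single-neuron sketch with the standard parameter set (the coupled-network and information-flow machinery of the paper is not reproduced here); the integration scheme and thresholds are our own illustrative choices.

```python
import numpy as np

def hindmarsh_rose(I=3.2, t_end=2000.0, dt=0.01):
    """Forward-Euler integration of a single Hindmarsh-Rose neuron,
        x' = y + 3x^2 - x^3 - z + I
        y' = 1 - 5x^2 - y
        z' = r * (s * (x - x_r) - z),
    with the standard parameters r=0.006, s=4, x_r=-8/5; I=3.2
    puts the model in its bursting regime.  Returns the membrane
    variable x over time."""
    r, s, x_r = 0.006, 4.0, -8.0 / 5.0
    x, y, z = -1.6, 0.0, 0.0
    xs = np.empty(int(t_end / dt))
    for i in range(xs.size):
        dx = y + 3.0 * x**2 - x**3 - z + I
        dy = 1.0 - 5.0 * x**2 - y
        dz = r * (s * (x - x_r) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

v = hindmarsh_rose()
spikes = int(np.sum((v[1:] >= 1.0) & (v[:-1] < 1.0)))  # upward crossings
```

In a network simulation such as the paper's, many of these units are coupled along the edges of the connectome graph, and synchronization is measured across the resulting time series.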